747 research outputs found

    The evolution of bits and bottlenecks in a scientific workflow trying to keep up with technology: Accelerating 4D image segmentation applied to NASA data

    In 2016, a team of earth scientists directly engaged a team of computer scientists to identify cyberinfrastructure (CI) approaches that would speed up an earth science workflow. This paper describes the evolution of that workflow as the two teams bridged CI and an image segmentation algorithm to conduct large-scale earth science research. The Pacific Research Platform (PRP) and the Cognitive Hardware and Software Ecosystem Community Infrastructure (CHASE-CI) resources were used to decrease the workflow's wall-clock time significantly, from 19.5 days to 53 minutes. The improvement comes from the use of network appliances, improved image segmentation, deployment of a containerized workflow, and increased CI experience and training for the earth scientists. This paper describes the innovations used to improve the workflow, the bottlenecks identified in each workflow version, and the improvements made in each version over a three-year period.
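    As a rough sanity check on the quoted figures, the implied speedup factor can be computed directly from the two wall-clock times; a minimal Python sketch using only the numbers quoted above:

        # Speedup implied by the quoted wall-clock times (19.5 days -> 53 minutes).
        old_minutes = 19.5 * 24 * 60   # 28,080 minutes
        new_minutes = 53
        print(f"speedup ~ {old_minutes / new_minutes:.0f}x")  # roughly 530x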

    Analyzing Transatlantic Network Traffic over Scientific Data Caches

    Large scientific collaborations often share huge volumes of data around the world. Consequently, a significant amount of network bandwidth is needed for data replication and data access. Users in the same region may share resources as well as data, especially when they are working on related topics with similar datasets. In this work, we study the network traffic patterns and resource utilization of scientific data caches connecting European networks to the US. We explore the efficiency of resource utilization, especially for network traffic, which consists mostly of transatlantic data transfers, and the potential for deploying additional caching nodes. Our study shows that these data caches reduced network traffic volume by 97% during the study period, demonstrating that such caching nodes are effective in reducing wide-area network traffic.
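    One plausible way such a traffic-reduction figure can be derived is from cache access logs, comparing the bytes that still had to cross the wide-area link against the total bytes requested. A minimal sketch with invented byte counts (not the paper's actual methodology or data):

        # Illustrative sketch only: estimating wide-area traffic reduction
        # from a cache access log. Byte counts below are invented.
        requests = [
            # (bytes, served_from_cache)
            (4_000_000_000, True),
            (2_500_000_000, True),
            (1_200_000_000, False),   # cache miss: fetched over the transatlantic link
            (3_800_000_000, True),
        ]

        total_bytes = sum(b for b, _ in requests)
        wan_bytes = sum(b for b, hit in requests if not hit)  # traffic that still crossed the WAN
        reduction = 1 - wan_bytes / total_bytes
        print(f"traffic reduction: {reduction:.1%}")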

    A Study of B0 -> J/psi K(*)0 pi+ pi- Decays with the Collider Detector at Fermilab

    We report a study of the decays B0 -> J/psi K(*)0 pi+ pi-, which involve the creation of a u u-bar or d d-bar quark pair in addition to a b-bar -> c-bar(c s-bar) decay. The data sample consists of 110 1/pb of p p-bar collisions at sqrt{s} = 1.8 TeV collected by the CDF detector at the Fermilab Tevatron collider during 1992-1995. We measure the branching ratios to be BR(B0 -> J/psi K*0 pi+ pi-) = (8.0 +- 2.2 +- 1.5) * 10^{-4} and BR(B0 -> J/psi K0 pi+ pi-) = (1.1 +- 0.4 +- 0.2) * 10^{-3}. Contributions to these decays are seen from psi(2S) K(*)0, J/psi K0 rho0, J/psi K*+ pi-, and J/psi K1(1270).

    The Future of High Energy Physics Software and Computing

    Software and Computing (S&C) are essential to all High Energy Physics (HEP) experiments and many theoretical studies. The size and complexity of S&C are now commensurate with that of experimental instruments, playing a critical role in experimental design, data acquisition/instrumental control, reconstruction, and analysis. Furthermore, S&C often plays a leading role in driving the precision of theoretical calculations and simulations. Within this central role in HEP, S&C has been immensely successful over the last decade. This report looks forward to the next decade and beyond, in the context of the 2021 Particle Physics Community Planning Exercise ("Snowmass") organized by the Division of Particles and Fields (DPF) of the American Physical Society. Comment: Computational Frontier Report contribution to Snowmass 2021; 41 pages, 1 figure. v2: added missing reference and topical group conveners. v3: fixed typo.

    Cross Section Measurements of High-pT Dilepton Final-State Processes Using a Global Fitting Method

    We present a new method for studying high-$p_T$ dilepton events ($e^{\pm}e^{\mp}$, $\mu^{\pm}\mu^{\mp}$, $e^{\pm}\mu^{\mp}$) and simultaneously extracting the production cross sections of $p\bar{p} \to t\bar{t}$, $p\bar{p} \to W^+W^-$, and $p\bar{p} \to Z \to \tau\tau$ at a center-of-mass energy of $\sqrt{s} = 1.96$ TeV. We perform a likelihood fit to the dilepton data in a parameter space defined by the missing transverse energy and the number of jets in the event. Our results, which use $360\ \mathrm{pb}^{-1}$ of data recorded with the CDF II detector at the Fermilab Tevatron Collider, are $\sigma(t\bar{t}) = 8.5^{+2.7}_{-2.2}$ pb, $\sigma(W^+W^-) = 16.3^{+5.2}_{-4.4}$ pb, and $\sigma(Z \to \tau\tau) = 291^{+50}_{-46}$ pb. Comment: 20 pages, 2 figures, to be submitted to PRD-R
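    The global fit described above can be illustrated with a toy version: a binned Poisson likelihood over a (missing transverse energy, jet multiplicity) grid in which the free parameters are the normalizations of three process templates. The template shapes, observed counts, and starting values below are invented for illustration and are not the CDF analysis:

        # Minimal sketch (not the CDF analysis code): a binned Poisson likelihood fit
        # in a 2D (missing-ET, jet-multiplicity) space, where the free parameters are
        # the normalizations of three process templates (ttbar, WW, Z->tautau).
        import numpy as np
        from scipy.optimize import minimize

        # Toy templates: expected counts per (MET bin, N_jets bin) at unit cross section.
        templates = {
            "ttbar":   np.array([[0.1, 0.4], [0.3, 1.2]]),
            "WW":      np.array([[0.8, 0.2], [0.3, 0.1]]),
            "Ztautau": np.array([[2.0, 0.5], [0.4, 0.1]]),
        }
        observed = np.array([[650, 160], [140, 45]])  # invented event counts

        def neg_log_likelihood(xsec):
            """Poisson negative log-likelihood (up to a constant), summed over bins."""
            expected = sum(s * t for s, t in zip(xsec, templates.values()))
            return np.sum(expected - observed * np.log(expected))

        # Fit the three normalizations simultaneously ("global fitting method").
        result = minimize(neg_log_likelihood, x0=[10.0, 15.0, 250.0],
                          bounds=[(0, None)] * 3)
        print(dict(zip(templates, result.x)))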

    Search for a Higgs Boson Produced in Association with a W Boson in pbar-p Collisions at sqrt{s} = 1.96 TeV

    We present a search for a standard model Higgs boson produced in association with a W boson using 2.7 1/fb of integrated luminosity of pbar-p collision data taken at sqrt{s} = 1.96 TeV. Limits on the Higgs boson production rate are obtained for masses between 100 GeV and 150 GeV. Through the use of multivariate techniques, the analysis achieves an observed (expected) 95% confidence level upper limit of 5.6 (4.8) times the theoretically expected production cross section for a standard model Higgs boson with a mass of 115 GeV. Comment: submitted to Phys. Rev. Lett.

    Limits on Anomalous Triple Gauge Couplings in ppbar Collisions at sqrt{s}=1.96 TeV

    We present a search for anomalous triple gauge couplings (ATGC) in WW and WZ boson production. The boson pairs are produced in ppbar collisions at sqrt{s}=1.96 TeV, and the data sample corresponds to 350 pb-1 of integrated luminosity collected with the CDF II detector at the Fermilab Tevatron. In this search one W decays to leptons, and the other boson (W or Z) decays hadronically. Combining with a previously published CDF measurement of Wgamma boson production yields ATGC limits of -0.18 < lambda < 0.17 and -0.46 < Delta kappa < 0.39 at the 95% confidence level, using a cut-off scale Lambda=1.5 TeV. Comment: 7 pages, 3 figures. Submitted to Phys. Rev.

    Measurement of the Z/gamma* + b-jet cross section in pp collisions at 7 TeV

    The production of b jets in association with a Z/gamma* boson is studied using proton-proton collisions delivered by the LHC at a centre-of-mass energy of 7 TeV and recorded by the CMS detector. The inclusive cross section for Z/gamma* + b-jet production is measured in a sample corresponding to an integrated luminosity of 2.2 inverse femtobarns. The Z/gamma* + b-jet cross section with Z/gamma* to ll (where ll = ee or mu mu), for events with the invariant mass 60 < M(ll) < 120 GeV, at least one b jet at the hadron level with pT > 25 GeV and abs(eta) < 2.1, and a separation between the leptons and the jets of Delta R > 0.5, is found to be 5.84 +/- 0.08 (stat.) +/- 0.72 (syst.) +0.25/-0.55 (theory) pb. The kinematic properties of the events are also studied and found to be in agreement with the predictions made by the MadGraph event generator with the parton shower and the hadronisation performed by PYTHIA. Comment: Submitted to the Journal of High Energy Physics.
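    The fiducial definition quoted above is a set of simple kinematic cuts. A minimal sketch (with hypothetical input objects, not CMS analysis code) of how such a selection might be expressed, using the usual Delta R = sqrt(Delta eta^2 + Delta phi^2) separation:

        # Illustrative sketch only: applying the fiducial cuts quoted above
        # to toy lepton/jet kinematics.
        import math

        def delta_r(eta1, phi1, eta2, phi2):
            dphi = (phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi  # wrap to [-pi, pi)
            return math.hypot(eta1 - eta2, dphi)

        def passes_fiducial(m_ll, b_jets, leptons):
            """m_ll in GeV; b_jets and leptons are lists of dicts with pt, eta, phi."""
            if not (60.0 < m_ll < 120.0):
                return False
            good_jets = [
                j for j in b_jets
                if j["pt"] > 25.0 and abs(j["eta"]) < 2.1
                and all(delta_r(j["eta"], j["phi"], l["eta"], l["phi"]) > 0.5
                        for l in leptons)
            ]
            return len(good_jets) >= 1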

    Compressed representation of a partially defined integer function over multiple arguments

    In OLAP (OnLine Analytical Processing), data are analysed in an n-dimensional cube. The cube may be represented as a partially defined function over n arguments. Since the function is often not defined everywhere, we ask: is there a known way of representing the function, or the points at which it is defined, more compactly than the trivial representation?
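    One reading of the "trivial" representation the abstract alludes to is an explicit map from argument tuples (cube cells) to integer values, storing only the points where the function is defined; a minimal Python sketch with invented cube contents:

        # Sketch of the "trivial" representation: a partially defined integer
        # function over n integer arguments stored as an explicit map from
        # argument tuples to values. The example cube below is invented.
        from typing import Dict, Optional, Tuple

        Point = Tuple[int, ...]          # one cell coordinate in the n-dimensional cube
        cube: Dict[Point, int] = {       # only the defined cells are stored
            (2021, 3, 17): 412,          # e.g. (year, region_id, product_id) -> measure
            (2021, 3, 42): 87,
            (2022, 1, 17): 530,
        }

        def lookup(cube: Dict[Point, int], point: Point) -> Optional[int]:
            """Return the value if the function is defined at `point`, else None."""
            return cube.get(point)

        print(lookup(cube, (2021, 3, 17)))   # 412
        print(lookup(cube, (2020, 5, 99)))   # None: the function is undefined here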